Conference Proceedings
User Variability and IR System Evaluation
P Bailey, A Moffat, F Scholer, P Thomas; R Baeza-Yates, M Lalmas, A Moffat, B Ribeiro-Neto (eds.)
Proc. 38th Ann. Int. ACM SIGIR Conf. on Research and Development in Information Retrieval | ACM | Published: 2015
Abstract
Test collection design eliminates sources of user variability to make statistical comparisons among information retrieval (IR) systems more affordable. Does this choice unnecessarily limit the generalizability of the outcomes to real usage scenarios? We explore two aspects of user variability with regard to evaluating the relative performance of IR systems, assessing effectiveness in the context of a subset of topics from three TREC collections, with the embodied information needs categorized against three levels of increasing task complexity. First, we explore the impact of widely differing queries that searchers construct for the same information need description. By executing those queries, w…
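A minimal sketch (not taken from the paper) of the kind of per-topic analysis the abstract describes: grouping effectiveness scores by the information need they address and summarising how much the scores spread across different user-written query variants. Topic identifiers and score values below are hypothetical placeholders.

```python
from statistics import mean, pstdev

# Effectiveness score (e.g. average precision) for each user query variant,
# grouped by the topic (information need) the query was written for.
# All values here are illustrative, not results from the paper.
scores_by_topic = {
    "topic-401": [0.42, 0.18, 0.35],  # three users, three different queries
    "topic-402": [0.10, 0.12, 0.55],
}

for topic, scores in scores_by_topic.items():
    # The spread across variants indicates how much user query choice alone
    # moves the measured effectiveness of a fixed system on a fixed topic.
    print(f"{topic}: mean={mean(scores):.2f} sd={pstdev(scores):.2f}")
```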
Grants
Awarded by Australian Research Council